NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models
Ji, Yanbiao, Liu, Chang, Chen, Xin, Ding, Yue, Luo, Dan, Li, Mei, Lin, Wenqing, Lu, Hongtao
Graphs are a fundamental data structure for representing relationships in real-world scenarios. With the success of Large Language Models (LLMs) across various natural language processing (NLP) tasks, there has been growing interest in integrating LLMs into graph learning. However, applying LLMs to graph-related tasks poses significant challenges, as these models are not inherently designed to capture the complex structural information present in graphs. Existing approaches address this challenge through two strategies: the chain-of-tasks approach, which uses Graph Neural Networks (GNNs) to encode the graph structure so that LLMs are relieved of the need to understand spatial positions; and graph-to-text conversion, which translates graph structures into semantic text representations that LLMs can process. Despite their progress, these methods often struggle to fully preserve the topological information of graphs or require extensive computational resources, limiting their practical applicability. In this work, we introduce Node Tokenizer for Large Language Models (NT-LLM), a novel framework that efficiently encodes graph structures by selecting key nodes as anchors and representing each node based on its relative distance to these anchors. This position-anchored encoding effectively captures the graph topology, enabling enhanced reasoning capabilities in LLMs over graph data. Additionally, we implement a task-specific tuning procedure to further improve structural understanding within LLMs. Through extensive empirical evaluations, NT-LLM demonstrates significant performance improvements across a variety of graph-related tasks.
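As a rough illustration of the anchor-based encoding idea described above, the sketch below picks a few anchor nodes and represents every node by its shortest-path distances to them. The degree-based anchor selection, the example graph, and the sentinel value for unreachable nodes are illustrative assumptions, not NT-LLM's actual procedure.

```python
# Minimal sketch of anchor-based node position encoding (illustrative only).
# Assumes an undirected networkx graph; picking high-degree nodes as anchors
# is a placeholder heuristic, not necessarily NT-LLM's selection strategy.
import networkx as nx
import numpy as np

def anchor_position_encoding(G: nx.Graph, num_anchors: int = 4) -> dict:
    # Choose anchor nodes (here: the highest-degree nodes).
    anchors = sorted(G.nodes, key=G.degree, reverse=True)[:num_anchors]
    # For each anchor, compute shortest-path distances to every node (BFS).
    dist_maps = [nx.single_source_shortest_path_length(G, a) for a in anchors]
    # Each node is represented by its vector of distances to the anchors;
    # unreachable nodes get a sentinel value of -1.
    return {
        v: np.array([d.get(v, -1) for d in dist_maps], dtype=np.float32)
        for v in G.nodes
    }

# Example usage on a small graph.
G = nx.karate_club_graph()
enc = anchor_position_encoding(G, num_anchors=3)
print(enc[0])  # distance-to-anchor vector for node 0
```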
SkelCap: Automated Generation of Descriptive Text from Skeleton Keypoint Sequences
Keskin, Ali Emre, Keles, Hacer Yalim
Numerous sign language datasets exist, yet they typically cover only a limited selection of the thousands of signs used globally. Moreover, creating diverse sign language datasets is an expensive and challenging task due to the costs associated with gathering a varied group of signers. Motivated by these challenges, we aimed to develop a solution that addresses these limitations. In this context, we focused on textually describing body movements from skeleton keypoint sequences, leading to the creation of a new dataset. We structured this dataset around AUTSL, a comprehensive isolated Turkish sign language dataset. We also developed a baseline model, SkelCap, which can generate textual descriptions of body movements. This model processes the skeleton keypoint data as vectors, applies a fully connected layer for embedding, and utilizes a transformer neural network for sequence-to-sequence modeling. We conducted extensive evaluations of our model, including signer-agnostic and sign-agnostic assessments. The model achieved promising results, with a ROUGE-L score of 0.98 and a BLEU-4 score of 0.94 in the signer-agnostic evaluation. The dataset we have prepared, namely AUTSL-SkelCap, will be made publicly available soon.
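The pipeline sketched below mirrors the described architecture at a high level: per-frame keypoints are flattened to a vector, embedded with a fully connected layer, and fed to a Transformer for sequence-to-sequence caption generation. The keypoint dimensionality, vocabulary size, and layer counts are placeholders rather than the paper's settings.

```python
# Illustrative SkelCap-style pipeline: keypoint vectors -> linear embedding
# -> Transformer encoder-decoder -> caption token logits. Sizes are placeholders.
import torch
import torch.nn as nn

class SkeletonCaptioner(nn.Module):
    def __init__(self, keypoint_dim=274, d_model=256, vocab_size=8000):
        super().__init__()
        self.input_proj = nn.Linear(keypoint_dim, d_model)   # keypoint embedding
        self.token_emb = nn.Embedding(vocab_size, d_model)   # caption tokens
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=2,
                                          num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, keypoints, caption_tokens):
        # keypoints: (batch, frames, keypoint_dim); caption_tokens: (batch, len)
        src = self.input_proj(keypoints)
        tgt = self.token_emb(caption_tokens)
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, len, vocab_size) logits

model = SkeletonCaptioner()
logits = model(torch.randn(2, 60, 274), torch.randint(0, 8000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 8000])
```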
Transformer-based Reasoning for Learning Evolutionary Chain of Events on Temporal Knowledge Graph
Fang, Zhiyu, Lei, Shuai-Long, Zhu, Xiaobin, Yang, Chun, Zhang, Shi-Xue, Yin, Xu-Cheng, Qin, Jingyan
Temporal Knowledge Graph (TKG) reasoning often involves completing missing factual elements along the timeline. Although existing methods can learn good embeddings for each factual element in quadruples by integrating temporal information, they often fail to infer the evolution of temporal facts. This is mainly because of (1) insufficiently exploring the internal structure and semantic relationships within individual quadruples and (2) inadequately learning a unified representation of the contextual and temporal correlations among different quadruples. To overcome these limitations, we propose a novel Transformer-based reasoning model (dubbed ECEformer) for TKG to learn the Evolutionary Chain of Events (ECE). Specifically, we unfold the neighborhood subgraph of an entity node in chronological order, forming an evolutionary chain of events as the input for our model. Subsequently, we utilize a Transformer encoder to learn the embeddings of intra-quadruples for ECE. We then craft a mixed-context reasoning module based on the multi-layer perceptron (MLP) to learn the unified representations of inter-quadruples for ECE while accomplishing temporal knowledge reasoning. In addition, to enhance the timeliness of the events, we devise an additional time prediction task to complete effective temporal information within the learned unified representation. Extensive experiments on six benchmark datasets verify the state-of-the-art performance and the effectiveness of our method.
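A minimal sketch of how an evolutionary chain of events might be assembled for a query entity, assuming quadruples are plain (head, relation, tail, timestamp) tuples; the filtering and truncation choices here are illustrative, not the paper's exact construction.

```python
# Build a chronologically ordered neighborhood chain for an entity
# (illustrative data layout, not ECEformer's exact preprocessing).
from typing import List, Tuple

Quadruple = Tuple[str, str, str, int]  # (head, relation, tail, timestamp)

def evolutionary_chain(entity: str, quads: List[Quadruple],
                       max_len: int = 8) -> List[Quadruple]:
    # Keep facts that mention the entity, then unfold them in time order.
    neighborhood = [q for q in quads if entity in (q[0], q[2])]
    neighborhood.sort(key=lambda q: q[3])
    return neighborhood[-max_len:]  # most recent events, up to max_len

quads = [
    ("Germany", "host", "World_Cup", 2006),
    ("Germany", "negotiate", "France", 2003),
    ("France", "host", "World_Cup", 1998),
]
print(evolutionary_chain("Germany", quads))
```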
SpherE: Expressive and Interpretable Knowledge Graph Embedding for Set Retrieval
Li, Zihao, Ao, Yuyi, He, Jingrui
Knowledge graphs (KGs), which store an extensive number of relational facts (h, r, t), serve various applications. While many downstream tasks highly rely on the expressive modeling and predictive embedding of KGs, most of the current KG representation learning methods, where each entity is embedded as a vector in the Euclidean space and each relation is embedded as a transformation, follow an entity ranking protocol. On one hand, such an embedding design cannot capture many-to-many relations. On the other hand, in many retrieval cases, the users wish to get an exact set of answers without any ranking, especially when the results are expected to be precise, e.g., which genes cause an illness. Such scenarios are commonly referred to as "set retrieval". This work presents a pioneering study on the KG set retrieval problem.

Knowledge Graphs (KGs), e.g., the widely used YAGO [23], Freebase [3], DBpedia [2], WordNet [19], have been serving multiple downstream applications such as information retrieval [30], recommender systems [36, 38], natural language processing [32, 34], multimedia network analysis [31, 35], question answering [14, 16], and fact checking [15, 17]. To utilize the extensive amount of knowledge in the KG, many works have studied Knowledge Graph Embedding (KGE), which learns low-dimensional representations of entities and relations [10, 21, 26, 27, 29]. Starting from TransE [4], a group of translation-based methods (TransH [28], TransR [13], TransD [9], TorusE [6]) model the relation as translations between entities in the embedding space. However, the translation-based ...
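To make the "translations between entities" idea concrete, the sketch below scores a triple TransE-style, by how close h + r lands to t in embedding space. It illustrates the translation-based baseline family only, not the SpherE model itself, and the embeddings are random placeholders.

```python
# TransE-style scoring: a fact (h, r, t) is plausible when h + r is close to t.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entity_emb = {e: rng.normal(size=dim) for e in ["gene_A", "illness_X", "gene_B"]}
relation_emb = {"causes": rng.normal(size=dim)}

def transe_score(h: str, r: str, t: str) -> float:
    # Higher (less negative) score means the triple is more plausible.
    return -float(np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t]))

print(transe_score("gene_A", "causes", "illness_X"))
```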
Mitigating Heterogeneity among Factor Tensors via Lie Group Manifolds for Tensor Decomposition Based Temporal Knowledge Graph Embedding
Li, Jiang, Su, Xiangdong, Gong, Yeyun, Gao, Guanglai
Recent studies have highlighted the effectiveness of tensor decomposition methods in the Temporal Knowledge Graph Embedding (TKGE) task. However, we found that inherent heterogeneity among factor tensors in tensor decomposition significantly hinders the tensor fusion process and further limits the performance of link prediction. To overcome this limitation, we introduce a novel method that maps factor tensors onto a unified smooth Lie group manifold to make the distribution of factor tensors approximately homogeneous in tensor decomposition. We provide a theoretical proof of our motivation that homogeneous tensors are more effective than heterogeneous tensors both in tensor fusion and in approximating the target for tensor decomposition based TKGE methods. The proposed method can be directly integrated into existing tensor decomposition based TKGE methods without introducing extra parameters. Extensive experiments demonstrate the effectiveness of our method in mitigating the heterogeneity and in enhancing tensor decomposition based TKGE models.
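The abstract does not spell out which Lie group manifold is used, so the following is only a loose illustration: normalizing the entries of a complex-valued factor tensor to unit modulus maps them onto U(1) (the unit circle, a smooth Lie group), which makes the magnitude distribution of the factors homogeneous before fusion. The paper's actual mapping may differ.

```python
# Loose illustration: map complex factor entries onto U(1) by normalizing
# each entry to unit modulus, so all factors share the same magnitude scale.
import torch

def map_to_unit_circle(factor: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    # factor: complex-valued embedding tensor, e.g. (num_entities, rank)
    return factor / (factor.abs() + eps)

entity_factor = torch.randn(100, 32, dtype=torch.cfloat)
homogeneous = map_to_unit_circle(entity_factor)
print(homogeneous.abs().mean())  # ~1.0: entries now lie on the unit circle
```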
Temporal Knowledge Graph Completion with Time-sensitive Relations in Hypercomplex Space
Cai, Li, Mao, Xin, Wang, Zhihong, Zhao, Shangqing, Zhou, Yuhao, Wu, Changxu, Lan, Man
Temporal knowledge graph completion (TKGC) aims to fill in missing facts within a given temporal knowledge graph at a specific time. Existing methods, operating in real or complex spaces, have demonstrated promising performance in this task. This paper advances beyond conventional approaches by introducing more expressive quaternion representations for TKGC within hypercomplex space. Unlike existing quaternion-based methods, our study focuses on capturing time-sensitive relations rather than time-aware entities. Specifically, we model time-sensitive relations through time-aware rotation and periodic time translation, effectively capturing complex temporal variability. Furthermore, we theoretically demonstrate our method's capability to model symmetric, asymmetric, inverse, compositional, and evolutionary relation patterns. Comprehensive experiments on public datasets validate that our proposed approach achieves state-of-the-art performance in the field of TKGC.
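The hypercomplex rotation underlying such quaternion-based methods is the Hamilton product; a minimal sketch is given below. How the paper combines it with time-aware rotation and periodic time translation is left schematic here, and the example quaternions are arbitrary.

```python
# Quaternion (Hamilton) multiplication, the rotation operation used by
# quaternion-based KG embedding methods. Quaternions are (w, x, y, z).
import numpy as np

def hamilton_product(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Rotating a unit entity quaternion by a unit relation quaternion.
h = np.array([0.5, 0.5, 0.5, 0.5])
r = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])
print(hamilton_product(r, h))
```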
Shortest Path Networks for Graph Property Prediction
Abboud, Ralph, Dimitrov, Radoslav, Ceylan, İsmail İlkan
Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information and makes it practically impossible to gather meaningful signals from distant nodes. To address this, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in the shortest path neighborhoods. In this setting, nodes can communicate directly with each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Our framework generalizes message passing neural networks, resulting in a class of more expressive models, including some recent state-of-the-art models. We verify the capacity of a basic model of this framework on dedicated synthetic experiments, and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
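A bare-bones sketch of the shortest-path message-passing idea: each node aggregates features from its k-hop shortest-path neighborhoods, with a separate weight matrix per hop distance. The mean aggregation and the layer sizes are illustrative choices, not the paper's architecture.

```python
# Toy shortest-path message-passing layer: aggregate per shortest-path hop.
import networkx as nx
import numpy as np

def sp_message_passing(G: nx.Graph, X: np.ndarray, weights: list) -> np.ndarray:
    # X: (num_nodes, in_dim); weights[k]: (in_dim, out_dim) for hop k+1.
    max_hop = len(weights)
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    for v in G.nodes:
        dists = nx.single_source_shortest_path_length(G, v, cutoff=max_hop)
        for k in range(1, max_hop + 1):
            hood = [u for u, d in dists.items() if d == k]
            if hood:  # mean-aggregate the k-hop shortest-path neighborhood
                out[v] += X[hood].mean(axis=0) @ weights[k - 1]
    return out

G = nx.cycle_graph(6)
X = np.eye(6)
W = [np.random.randn(6, 4), np.random.randn(6, 4)]  # hops 1 and 2
print(sp_message_passing(G, X, W).shape)  # (6, 4)
```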
Temporal Knowledge Graph Completion using Box Embeddings
Messner, Johannes, Abboud, Ralph, Ceylan, İsmail İlkan
Knowledge graph completion is the task of inferring missing facts based on existing data in a knowledge graph. Temporal knowledge graph completion (TKGC) is an extension of this task to temporal knowledge graphs, where each fact is additionally associated with a time stamp. Current approaches for TKGC primarily build on existing embedding models which are developed for (static) knowledge graph completion, and extend these models to incorporate time, where the idea is to learn latent representations for entities, relations, and timestamps and then use the learned representations to predict missing facts at various time steps. In this paper, we propose BoxTE, a box embedding model for TKGC, building on the static knowledge graph embedding model BoxE. We show that BoxTE is fully expressive, and possesses strong inductive capacity in the temporal setting. We then empirically evaluate our model and show that it achieves state-of-the-art results on several TKGC benchmarks.
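A toy illustration of the box-embedding ingredient: a box is parameterized by a center and a positive offset, and a fact is considered plausible when the entity point falls inside the relation's box. The temporal component of BoxTE (e.g., time-dependent shifts) is not modeled in this sketch.

```python
# Hard box containment check for a box-embedding style model (illustrative).
import numpy as np

def in_box(point: np.ndarray, center: np.ndarray, offset: np.ndarray) -> bool:
    # True if the point lies inside the axis-aligned box [center-offset, center+offset].
    return bool(np.all(np.abs(point - center) <= offset))

dim = 4
head_box = (np.zeros(dim), np.full(dim, 1.0))   # (center, offset) for a relation's head position
entity = np.array([0.3, -0.2, 0.5, 0.1])
print(in_box(entity, *head_box))  # True: the entity lies inside the head box
```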
Geometric Models for (Temporally) Attributed Description Logics
Bourgaux, Camille, Ozaki, Ana, Pan, Jeff Z.
In the search for knowledge graph embeddings that could capture ontological knowledge, geometric models of existential rules have been recently introduced. It has been shown that convex geometric regions capture the so-called quasi-chained rules. Attributed description logics (DLs) have been defined to bridge the gap between DL languages and knowledge graphs, whose facts often come with various kinds of annotations that may need to be taken into account for reasoning. In particular, temporally attributed DLs are enriched by specific attributes whose semantics allows for some temporal reasoning. Considering that geometric models and (temporally) attributed DLs are promising tools designed for knowledge graphs, this paper investigates their compatibility, focusing on the attributed version of a Horn dialect of the DL-Lite family. We first adapt the definition of geometric models to attributed DLs and show that every satisfiable ontology has a convex geometric model. Our second contribution is a study of the impact of temporal attributes. We show that a temporally attributed DL may not have a convex geometric model in general, but we can recover geometric satisfiability by imposing some restrictions on the use of the temporal attributes.
Modeling Fine-Grained Entity Types with Box Embeddings
Onoe, Yasumasa, Boratko, Michael, Durrett, Greg
Neural entity typing models typically represent entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies. We study the ability of box embeddings, which represent entity types as d-dimensional hyperrectangles, to represent hierarchies of fine-grained entity type labels even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention. Soft box containment can then be used to derive probabilities, both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves. We compare our approach with a strong vector-based typing model, and observe state-of-the-art performance on several entity typing benchmarks. In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), implying that the box-based model captures the latent type hierarchies better than the vector-based model does.
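The sketch below shows the containment idea with hard boxes: the probability of a type given a mention is approximated by the volume of the box intersection divided by the volume of the mention box. Actual models use soft (e.g., Gumbel) boxes, and the two-dimensional boxes here are purely illustrative.

```python
# Hard-box approximation of P(type | mention) via intersection volume.
import numpy as np

def box_volume(lo: np.ndarray, hi: np.ndarray) -> float:
    # Volume of an axis-aligned box; empty boxes get volume 0.
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def p_type_given_mention(type_box, mention_box) -> float:
    (tlo, thi), (mlo, mhi) = type_box, mention_box
    inter_lo, inter_hi = np.maximum(tlo, mlo), np.minimum(thi, mhi)
    denom = box_volume(mlo, mhi)
    return box_volume(inter_lo, inter_hi) / denom if denom > 0 else 0.0

person = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))    # (lower, upper) corners
mention = (np.array([0.2, 0.1]), np.array([0.6, 0.5]))
print(p_type_given_mention(person, mention))  # 1.0: mention box lies inside "person"
```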